High Performance Computing for Computational Biology - Session Introduction
Authors
Abstract
Computational biology has become a fully multidisciplinary field, including components of mathematics, biology, chemistry, and computer science. Computational biology is essentially the computer-aided analysis of the biology of organisms. Since even a single genome or proteome contains an immense quantity of data, performing even a simple analysis on genome-scale data quickly turns into a computationally difficult problem. Thus, computational biology now requires high-performance computing and its related components in database systems, visualization, and computer engineering. The focus of this session is computational biology's growing need for high-performance computing. The first purely computational steps of the human genome project have required vast amounts of computing equipment with, for example, large processor farms being used in both the private and public assemblies of the human genome. To the over-optimistic researcher, the simplest way to gain a factor of 10 performance increase is to use 10 processors. In practice, many critical issues degrade performance. Communication between the processing elements takes additional time, as does partitioning the problem and recombining the answers. Some algorithms cannot even be effectively partitioned. As seen in some of the papers in this session, often the most significant overhead is the additional time required to adapt an algorithm to a high-performance system. The session received 26 papers, 11 of which the conference chairs selected.
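The speedup limits sketched above are usually captured by Amdahl's law. The toy model below adds a linear per-processor communication cost to show how overheads keep 10 processors from delivering a 10× gain; the `serial_fraction` and `comm_overhead` values are illustrative assumptions, not figures from the session papers.

```python
def speedup(p, serial_fraction, comm_overhead=0.0):
    """Amdahl's-law speedup on p processors.

    serial_fraction: share of the work that cannot be partitioned.
    comm_overhead:   illustrative per-processor communication cost,
                     expressed as a fraction of single-processor runtime.
    """
    parallel_fraction = 1.0 - serial_fraction
    return 1.0 / (serial_fraction + parallel_fraction / p + comm_overhead * p)

# Perfectly partitionable work with no overhead: 10 processors, 10x speedup.
print(speedup(10, 0.0))                        # -> 10.0

# A 5% serial fraction alone caps the gain below 7x.
print(round(speedup(10, 0.05), 2))             # -> 6.9

# Communication overhead degrades it further.
print(round(speedup(10, 0.05, 0.01), 2))       # -> 4.08
```

Beyond some processor count the `comm_overhead * p` term dominates and adding processors actually slows the job down, which is one reason adapting an algorithm to a high-performance system is itself a significant cost.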
Similar resources
Session S3A THE UCSC KESTREL HIGH PERFORMANCE SIMD PROCESSOR: PRESENT AND FUTURE
The UCSC Kestrel parallel processor is a single-board linear array processor with 512 8-bit processing elements. In the process of building the machine, we have touched nearly all aspects of computer engineering, from VLSI layout to board design and debugging, and from device drivers to new algorithm development. The programmable array is primarily designed for several core algorithms from comp...
Session Introduction
This marks the first time that the Pacific Symposium in Biocomputing has hosted a session specifically devoted to the emerging computational needs of metabolomics. Metabolomics, or metabonomics as it is sometimes called, is a relatively new field of “omics” research concerned with the high-throughput identification and quantification of the small molecule metabolites in the metabolome (i.e. the...
Stability Assessment Metamorphic Approach (SAMA) for Effective Scheduling based on Fault Tolerance in Computational Grid
Grid Computing allows coordinated and controlled resource sharing and problem solving in multi-institutional, dynamic virtual organizations. Moreover, fault tolerance and task scheduling are important issues for large-scale computational grids because of the unreliable nature of grid resources. A commonly exploited technique to realize fault tolerance is periodic checkpointing, which periodically ...
Session Introduction
High-throughput proteomics is a rapidly developing field that offers the global profiling of proteins from a biological system. These high-throughput technological advances are fueling a revolution in biology, enabling analyses at the scale of entire systems (e.g., whole cells, tumors, or environmental communities). However, simply identifying the proteins in a cell is insufficient for understa...
Distributed, High-Performance and Grid Computing in Computational Biology, International Workshop, GCCB 2006, Eilat, Israel, January 21, 2007, Proceedings
What do you do to start reading Distributed, High-Performance and Grid Computing in Computational Biology? Do you search for a book you love to read first, or look for an interesting book that will make you want to read? Everybody has different reasons for reading a book. Actually, the reading habit must begin early. Many people may love to read, but not a book. It's not a fault. Someone wil...